head direction
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
- Information Technology > Modeling & Simulation (0.91)
- North America > United States > Wisconsin > Dane County > Madison (0.04)
- North America > United States > Iowa (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Information Technology (0.93)
- Food & Agriculture > Agriculture (0.69)
Appendix
Fitting T1-mGPLVM to the binned spike data, we found that the inferred latent state was highly correlated with the true head direction (Figure 5b). Here we make this connection more explicit. As described in the main text, the Lie algebra g of a group G is a vector space tangent to G at its identity element. However, because the Lie algebra is isomorphic to R^n, we have found it convenient in both our exposition and our implementation to work directly with the pair (R^n, Exp_G) instead of (g, exp_G). We begin by noting that S^n is not a Lie group unless n = 1 or n = 3; thus we can only apply the ReLie framework to S^1 and S^3.
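To make the pair (R^n, Exp_G) concrete, here is a minimal sketch (not the authors' code) for the simplest case G = S^1: the Lie algebra is one-dimensional, so we identify it with R, and the exponential map sends a tangent coordinate theta to the unit vector (cos theta, sin theta). The function names `exp_s1` and `log_s1` are illustrative.

```python
import numpy as np

def exp_s1(theta):
    """Exponential map R -> S^1, with S^1 identified with unit vectors in R^2."""
    return np.array([np.cos(theta), np.sin(theta)])

def log_s1(x):
    """Log map S^1 -> R, inverting exp_s1 on the principal branch (-pi, pi]."""
    return np.arctan2(x[1], x[0])

theta = 0.7
x = exp_s1(theta)
assert np.isclose(np.linalg.norm(x), 1.0)  # exp lands on the manifold
assert np.isclose(log_s1(x), theta)        # log inverts exp near the identity
```

The same pattern extends to S^3 with unit quaternions in place of unit complex numbers, which is why the framework applies exactly to these two spheres.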
Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect
Therefore, long-tailed classification is the key to deep learning at scale. However, existing methods are mainly based on re-weighting/re-sampling heuristics that lack a fundamental theory. In this paper, we establish a causal inference framework, which not only unravels the whys of previous methods, but also derives a new principled solution.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
Manifold GPLVMs for discovering non-Euclidean latent structure in neural data
A common problem in neuroscience is to elucidate the collective neural representations of behaviorally important variables such as head direction, spatial location, upcoming movements, or mental spatial transformations. Often, these latent variables are internal constructs not directly accessible to the experimenter. Here, we propose a new probabilistic latent variable model to simultaneously identify the latent state and the way each neuron contributes to its representation in an unsupervised way. In contrast to previous models which assume Euclidean latent spaces, we embrace the fact that latent states often belong to symmetric manifolds such as spheres, tori, or rotation groups of various dimensions. We therefore propose the manifold Gaussian process latent variable model (mGPLVM), where neural responses arise from (i) a shared latent variable living on a specific manifold, and (ii) a set of non-parametric tuning curves determining how each neuron contributes to the representation. Cross-validated comparisons of models with different topologies can be used to distinguish between candidate manifolds, and variational inference enables quantification of uncertainty. We demonstrate the validity of the approach on several synthetic datasets, as well as on calcium recordings from the ellipsoid body of Drosophila melanogaster and extracellular recordings from the mouse anterodorsal thalamic nucleus. These circuits are both known to encode head direction, and mGPLVM correctly recovers the ring topology expected from neural populations representing a single angular variable.
- North America > United States > Washington > King County > Bellevue (0.04)
- North America > United States > New York (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
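The mGPLVM abstract above describes a two-part generative model: a shared latent variable on a manifold and a set of non-parametric tuning curves. As a minimal illustrative sketch (not the authors' implementation), the following simulates that structure for the ring case: a latent trajectory on S^1, GP tuning curves drawn from a periodic kernel, and Poisson spike counts. The random-walk trajectory, kernel choice, and Poisson link are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 5                       # time bins, neurons

# Latent head-direction trajectory on S^1 (a random walk, for illustration).
g = np.cumsum(rng.normal(0.0, 0.1, T)) % (2 * np.pi)

def circle_kernel(a, b, ell=0.5, var=1.0):
    """Periodic (exponentiated-cosine) kernel, so tuning curves live on the ring."""
    d = a[:, None] - b[None, :]
    return var * np.exp((np.cos(d) - 1.0) / ell**2)

# Draw one GP tuning-curve evaluation per neuron along the latent trajectory.
K = circle_kernel(g, g) + 1e-5 * np.eye(T)
L = np.linalg.cholesky(K)
f = L @ rng.normal(size=(T, N))     # log-rates, shape (T, N)

# Poisson spike counts given the log-rates (noise model is an assumption).
Y = rng.poisson(np.exp(f))
```

Inference in the paper then goes in the opposite direction: given only Y, recover both the trajectory g and the tuning curves, with cross-validation across candidate manifolds used to identify the topology.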
Supplementary Material: MmCows: A Multimodal Dataset for Dairy Cattle Monitoring
This document provides additional details that complement the main paper. We discuss the steps used to synchronize and calibrate the visual data in Section A. Section B elaborates on the details of UWB localization, heading direction estimation, and obtaining the reference for lying behavior. Figures, tables, and equations are kept in numerical order and are numbered independently from the main paper unless explicitly stated otherwise. The paper checklist is attached as the final part of the main paper. We discuss additional details of processing the visual data and calibrating the four camera views.
- North America > United States > Wisconsin > Dane County > Madison (0.04)
- North America > United States > Iowa (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Food & Agriculture > Agriculture (1.00)
- Information Technology (0.93)
A*Net and NBFNet Learn Negative Patterns on Knowledge Graphs
Betz, Patrick, Stelzner, Nathanael, Meilicke, Christian, Stuckenschmidt, Heiner, Bartelt, Christian
In this technical report, we investigate the predictive performance differences of a rule-based approach and the GNN architectures NBFNet and A*Net with respect to knowledge graph completion. For the two most common benchmarks, we find that a substantial fraction of the performance difference can be explained by one unique negative pattern on each dataset that is hidden from the rule-based approach. Our findings add a unique perspective on the performance difference of different model classes for knowledge graph completion: models can achieve a predictive performance advantage by penalizing scores of incorrect facts, as opposed to providing high scores for correct facts.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Europe > Germany (0.04)